Machine Learning for OpenCV by Michael Beyeler



Detected pedestrians in a test image

Further improving the model

Although the RBF kernel makes for a good default, it is not always the kernel that works best for our problem. The only real way to know which kernel works best on our data is to try them all and compare the classification performance across models. There are strategic ways to perform this so-called hyperparameter tuning, which we'll talk about in detail in Chapter 11, Selecting the Right Model with Hyper-Parameter Tuning.
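As a rough sketch of what such a comparison could look like, one could train one SVM per kernel with OpenCV's ml module and score each model on a held-out test set. Note that the two-moons toy data below is purely a hypothetical stand-in, not the pedestrian features from this chapter:

import cv2
import numpy as np
from sklearn import datasets, model_selection

# Toy stand-in data; OpenCV's SVM expects float32 features and int32 labels
X, y = datasets.make_moons(n_samples=300, noise=0.25, random_state=42)
X = X.astype(np.float32)
y = y.astype(np.int32)
X_train, X_test, y_train, y_test = model_selection.train_test_split(
    X, y, test_size=0.2, random_state=42)

# Train one SVM per kernel and compare accuracy on the test set
kernels = {'linear': cv2.ml.SVM_LINEAR,
           'rbf': cv2.ml.SVM_RBF,
           'sigmoid': cv2.ml.SVM_SIGMOID}
for name, kernel in kernels.items():
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(kernel)
    svm.train(X_train, cv2.ml.ROW_SAMPLE, y_train)
    _, y_pred = svm.predict(X_test)
    acc = np.mean(y_pred.flatten() == y_test)
    print('%-7s kernel: %.3f accuracy' % (name, acc))

Keep in mind that this only compares the kernels at their default settings; a fair comparison would also tune each kernel's parameters (such as C and gamma), which is exactly what Chapter 11 is about.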

What if we don't know how to do hyperparameter tuning properly yet?

Well, I'm sure you remember the first step in data understanding: visualize the data. Visualizing the data can help us understand whether a linear SVM is powerful enough to classify the data, in which case there would be no need for a more complex model. After all, we know that if we can draw a straight line to separate the data, a linear classifier is all we need. However, for a more complex problem, we have to think harder about what shape the optimal decision boundary should have.
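As a minimal sketch of that first step (reusing the hypothetical X and y from the snippet above), a simple scatter plot colored by class label often already reveals whether a straight line could do the job:

import matplotlib.pyplot as plt

# Scatter plot of a 2D feature matrix X (shape N x 2) with labels y (0 or 1);
# higher-dimensional data could first be projected to 2D, e.g., with PCA.
plt.figure(figsize=(6, 5))
plt.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=plt.cm.coolwarm)
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.title('Is a straight line enough to separate the two classes?')
plt.show()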

In more general terms, we should think about the geometry of our problem. Some questions we might ask ourselves are:

Does visualizing the dataset make it obvious which kind of decision rule would work best? For example, is a straight line good enough (in which case, we would use a linear SVM)? Can we group different data points into blobs or hotspots (in which case, we would use an RBF kernel)?

Is there a family of data transformations that do not fundamentally change our problem? For example, could we flip the plot on its head, or rotate it, and get the same result? Our kernel should reflect that.

Is there a way to preprocess our data that we haven't tried yet? Spending some time on feature engineering can make the classification problem much easier in the end. Perhaps we might even be able to use a linear SVM on the transformed data, as sketched after this list.
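To make that last point concrete, here is a small hypothetical sketch. Two concentric rings cannot be separated by a straight line in their raw (x, y) coordinates, but adding a single hand-crafted feature, the squared distance from the origin, turns the problem into one that a plain linear SVM handles easily:

import cv2
import numpy as np
from sklearn import datasets

# Two concentric rings: not linearly separable in the raw (x, y) plane
X, y = datasets.make_circles(n_samples=300, factor=0.4, noise=0.05,
                             random_state=42)

# Feature engineering: append the squared distance from the origin.
# In this new 3D feature space, a single threshold on the extra feature
# separates the rings, so a linear kernel is all we need.
r2 = np.sum(X ** 2, axis=1)
X_new = np.c_[X, r2].astype(np.float32)

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(X_new, cv2.ml.ROW_SAMPLE, y.astype(np.int32))
_, y_pred = svm.predict(X_new)
print('training accuracy with the extra feature: %.3f'
      % np.mean(y_pred.flatten() == y))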


